
    A true positives theorem for a static race detector

    RacerD is a static race detector that has been proven to be effective in engineering practice: it has seen thousands of data races fixed by developers before reaching production, and has supported the migration of Facebook's Android app rendering infrastructure from a single-threaded to a multi-threaded architecture. We prove a True Positives Theorem stating that, under certain assumptions, an idealized theoretical version of the analysis never reports a false positive. We also provide an empirical evaluation of an implementation of this analysis, versus the original RacerD. The theorem was motivated in the first case by the desire to understand the observation from production that RacerD was providing remarkably accurate signal to developers, and then the theorem guided further analyzer design decisions. Technically, our result can be seen as saying that the analysis computes an under-approximation of an over-approximation, which is the reverse of the more usual (over of under) situation in static analysis. Until now, static analyzers that are effective in practice but unsound have often been regarded as ad hoc; in contrast, we suggest that, in the future, theorems of this variety might be generally useful in understanding, justifying and designing effective static analyses for bug catching.

    RacerD: compositional static race detection

    Automatic static detection of data races is one of the most basic problems in reasoning about concurrency. We present RacerD, a static program analysis for detecting data races in Java programs which is fast, can scale to large code, and has proven effective in an industrial software engineering scenario. To our knowledge, RacerD is the first inter-procedural, compositional data race detector which has been empirically shown to have non-trivial precision and impact. Due to its compositionality, it can analyze code changes quickly, and this allows it to perform continuous reasoning about a large, rapidly changing codebase as part of deployment within a continuous integration ecosystem. In contrast to previous static race detectors, its design favors reporting high-confidence bugs over ensuring their absence. RacerD has been in deployment for over a year at Facebook, where it has flagged over 2500 issues that have been fixed by developers before reaching production. It has been important in enabling the development of new code as well as fixing old code: it helped support the conversion of part of the main Facebook Android app from a single-threaded to a multi-threaded architecture. In this paper we describe RacerD's design, implementation, deployment and impact.
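
    For illustration, the sketch below shows the shape of bug a detector like this targets: a deliberately small Java class in which one method writes a field under the object's monitor while another reads it with no synchronization. The example is hypothetical, is not taken from the paper or from RacerD's code, and omits the annotations RacerD actually uses to decide which classes to analyze.

```java
// Hypothetical example of a read/write data race of the kind a static race
// detector is designed to flag; not taken from the RacerD paper.
public class Counter {

  private int count = 0;

  // Writer: updates the field while holding the object's intrinsic lock.
  public synchronized void increment() {
    count = count + 1;
  }

  // Reader: accesses the same field with no synchronization, so it can race
  // with a concurrent increment(). A detector that favors high-confidence
  // reports would flag the unprotected read of `count` here.
  public int get() {
    return count;
  }
}
```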

    A System of Interaction and Structure II: The Need for Deep Inference

    This paper studies properties of the logic BV, which is an extension of multiplicative linear logic (MLL) with a self-dual non-commutative operator. BV is presented in the calculus of structures, a proof-theoretic formalism that supports deep inference, in which inference rules can be applied anywhere inside logical expressions. The use of deep inference results in a simple logical system for MLL extended with the self-dual non-commutative operator, which to date is not known to be expressible in the sequent calculus. In this paper, deep inference is shown to be crucial for the logic BV: any restriction on the "depth" of the inference rules of BV would result in a strictly less expressive logical system.

    Separation Logic

    Separation logic is a key development in formal reasoning about programs, opening up new lines of attack on longstanding problems.

    Disproving termination with overapproximation

    When disproving termination using known techniques (e.g. recurrence sets), abstractions that overapproximate the program's transition relation are unsound. In this paper we introduce live abstractions, a natural class of abstractions that can be combined with the recent concept of closed recurrence sets to soundly disprove termination. To demonstrate the practical usefulness of this new approach we show how programs with nonlinear, nondeterministic, and heap-based commands can be shown nonterminating using linear overapproximations.
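
    As a concrete, purely hypothetical illustration of the idea in this abstract, the sketch below shows a loop with a nonlinear update, a linear relation that over-approximates its transitions, and a set of states that the abstract transitions cannot leave, which is the shape of evidence a closed recurrence set provides. The program, the abstraction and the candidate set are assumptions chosen for exposition, not examples from the paper.

```java
// Hypothetical illustration, not taken from the paper.
public class NonTerminationSketch {

  // Concrete program: a loop whose update x := x*x + 1 is nonlinear.
  // (Read x as a mathematical integer; Java's int overflow is beside the point.)
  static void loop(int x) {
    while (x > 0) {
      x = x * x + 1;
    }
  }

  // Linear overapproximation of the loop's transition for x >= 1:  x' >= x + 1
  // (valid because x*x + 1 >= x + 1 whenever x >= 1).
  //
  // Candidate closed recurrence set: G = { x | x >= 1 }.
  //   - G contains initial states of interest (any call with x >= 1);
  //   - every state in G has an abstract successor (e.g. x' = x + 1);
  //   - every abstract successor of a state in G satisfies x' >= 2, so it stays in G.
  // If the overapproximation qualifies as a "live" abstraction in the paper's
  // sense, such a set soundly witnesses nontermination of the concrete loop.
}
```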

    From start-ups to scale-ups: Opportunities and open problems for static and dynamic program analysis

    This paper describes some of the challenges and opportunities when deploying static and dynamic analysis at scale, drawing on the authors' experience with the Infer and Sapienz Technologies at Facebook, each of which started life as a research-led start-up that was subsequently deployed at scale, impacting billions of people worldwide. The paper identifies open problems that have yet to receive significant attention from the scientific community, yet which have potential for profound real world impact, formulating these as research questions that, we believe, are ripe for exploration and that would make excellent topics for research projects.

    A History of BlockingQueues

    This paper describes a way to formally specify the behaviour of concurrent data structures. When specifying concurrent data structures, the main challenge is to make specifications stable, i.e., to ensure that they cannot be invalidated by other threads. To this end, we propose to use history-based specifications: instead of describing method behaviour in terms of the object's state, we specify it in terms of the object's state history. A history is defined as a list of state updates, which at all points can be related to the actual object's state. We illustrate the approach on the BlockingQueue hierarchy from the java.util.concurrent library. We show how the behaviour of the interface BlockingQueue is specified, leaving a few decisions open to descendant classes. The classes implementing the interface correctly inherit the specifications. As a specification language, we use a combination of JML and permission-based separation logic, including abstract predicates. This results in an abstract, modular and natural way to specify the behaviour of concurrent queues. The specifications can be used to derive high-level properties about queues, for example to show that the order of elements is preserved. Moreover, the approach can be easily adapted to other concurrent data structures. Comment: In Proceedings FLACOS 2012, arXiv:1209.169
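
    The sketch below is a plain-Java rendering of the history idea under simplifying assumptions: it is not JML, not permission-based separation logic, and not the paper's specification. It only shows how a list of state updates can be kept alongside a queue and replayed to check that the recorded history accounts for the current state and that FIFO order is respected; blocking behaviour is elided.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Plain-Java sketch of history-based specification (not the paper's notation):
// every mutation appends an update to a history, and the current state can
// always be recomputed from that history.
public class HistoryQueue<E> {

  // A single state update: either a put of an element or a take of an element.
  private static final class Update<E> {
    final boolean isPut;
    final E element;
    Update(boolean isPut, E element) { this.isPut = isPut; this.element = element; }
  }

  private final Queue<E> state = new ArrayDeque<>();
  private final List<Update<E>> history = new ArrayList<>();

  public synchronized void put(E e) {
    state.add(e);
    history.add(new Update<>(true, e));
  }

  // Blocking is elided; taking from an empty queue simply throws here.
  public synchronized E take() {
    E e = state.remove();
    history.add(new Update<>(false, e));
    return e;
  }

  // Invariant relating history and state: replaying the history from an empty
  // queue reproduces the current state, so a specification written over the
  // history can be checked without being invalidated by interleaved calls.
  public synchronized boolean historyMatchesState() {
    Queue<E> replay = new ArrayDeque<>();
    for (Update<E> u : history) {
      if (u.isPut) {
        replay.add(u.element);
      } else if (!u.element.equals(replay.remove())) {
        return false;  // a take must remove the head element: FIFO order preserved
      }
    }
    return new ArrayList<>(replay).equals(new ArrayList<>(state));
  }
}
```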

    Why Separation Logic Works

    One might poetically muse that computers have the essence both of logic and machines. Through the case of the history of Separation Logic, we explore how this assertion is more than idle poetry. Separation Logic works because it merges the software engineer's conceptual model of a program's manipulation of computer memory with the logical model that interprets what sentences in the logic are true, and because it has a proof theory which aids in the crucial problem of scaling the reasoning task. Scalability is a central problem, and some would even say the central problem, in applications of logic in computer science. Separation Logic is an interesting case because of its widespread success in verification tools. For these two senses of model, the engineering/conceptual and the logical, to merge in a genuine sense, each must maintain its norms of use from its home discipline. When this occurs, both the logic and engineering benefit greatly. Seeking this intersection of two different senses of model provides a strategy for how computer scientists and logicians may be successful. Furthermore, the history of Separation Logic for analysing programs provides a novel case for philosophers of science of how software engineers and computer scientists develop models and the components of such models. We provide three contributions: an exploration of the extent of model merging that is necessary for success in computer science; an introduction to the technical details of Separation Logic, which can be used for reasoning about other exhaustible resources; and an introduction to (a subset of) the problems, process, and results of computer scientists for those outside the field.
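
    The proof-theoretic aid to scaling mentioned in this abstract is, in standard presentations of Separation Logic, local reasoning, usually summarized by the frame rule. The statement below is the textbook form of that rule in standard notation, included as background rather than as a result of this paper.

```latex
% Frame rule of Separation Logic (standard form):
% a proof about C that mentions only the memory described by P and Q
% extends to any larger heap described by an additional assertion R.
\[
\frac{\{P\}\; C\; \{Q\}}
     {\{P \ast R\}\; C\; \{Q \ast R\}}
\qquad \text{provided } C \text{ does not modify any variable free in } R
\]
```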

    A dependent nominal type theory

    Nominal abstract syntax is an approach to representing names and binding pioneered by Gabbay and Pitts. So far, nominal techniques have mostly been studied using classical logic or model theory, not type theory. Nominal extensions to simple, dependent and ML-like polymorphic languages have been studied, but decidability and normalization results have only been established for simple nominal type theories. We present an LF-style dependent type theory extended with name-abstraction types, prove soundness and decidability of beta-eta-equivalence checking, discuss adequacy and canonical forms via an example, and discuss extensions such as dependently-typed recursion and induction principles.

    How to design a complex behaviour change intervention: experiences from a nutrition-sensitive agriculture trial in rural India

    Many public health interventions aim to promote healthful behaviours, with varying degrees of success. With a lack of existing empirical evidence on the optimal number or combination of behaviours to promote to achieve a given health outcome, a key challenge in intervention design lies in deciding what behaviours to prioritise, and how best to promote them. We describe how key behaviours were selected and promoted within a multisectoral nutrition-sensitive agriculture intervention that aimed to address maternal and child undernutrition in rural India. First, we formulated a Theory of Change, which outlined our hypothesised impact pathways. To do this, we used the following inputs: existing conceptual frameworks, published empirical evidence, a feasibility study, formative research and the intervention team's local knowledge. Then, we selected specific behaviours to address within each impact pathway, based on our formative research, behaviour change models, local knowledge and community feedback. As the intervention progressed, we mapped each of the behaviours against our impact pathways and the transtheoretical model of behaviour change, to monitor the balance of behaviours across pathways and along stages of behaviour change. By collectively agreeing on definitions of complex concepts and hypothesised impact pathways, implementing partners were able to communicate clearly between each other and with intervention participants. Our intervention was iteratively informed by continuous review, by monitoring implementation against targets and by integrating community feedback. Impact and process evaluations will reveal whether these approaches are effective for improving maternal and child nutrition, and what the effects are on each hypothesised impact pathway.